12 research outputs found

    Stereoscopic image quality assessment by deep convolutional neural network

    The final publication is available at Elsevier via https://doi.org/10.1016/j.jvcir.2018.12.006. © 2018. This manuscript version is made available under the CC-BY-NC-ND 4.0 license: http://creativecommons.org/licenses/by-nc-nd/4.0/
    In this paper, we propose a no-reference (NR) quality assessment method for stereoscopic images based on a deep convolutional neural network (DCNN). The method is inspired by the internal generative mechanism (IGM) in the human brain, which suggests that the brain first analyzes perceptual information and then extracts effective visual information. To simulate this inner interaction process in the human visual system (HVS) when perceiving the visual quality of stereoscopic images, we construct a two-channel DCNN. First, we design a Siamese network to extract high-level semantic features of the left- and right-view images, simulating the process of information extraction in the brain. Second, to imitate the information interaction process in the HVS, we combine the high-level features of the left- and right-view images by convolutional operations. Finally, the information after interactive processing is used to estimate the visual quality of the stereoscopic image. Experimental results show that the proposed method estimates the visual quality of stereoscopic images accurately, which also demonstrates the effectiveness of the proposed two-channel convolutional neural network in simulating the perception mechanism of the HVS. This work was supported in part by the Natural Science Foundation of China under Grants 61822109 and 61571212, the Fok Ying Tung Education Foundation under Grant 161061, and the Natural Science Foundation of Jiangxi under Grant 20181BBH80002.
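    The two-channel pipeline described in the abstract (shared-weight feature extraction per view, followed by a fusion step and a regression head) can be illustrated with a minimal numpy sketch. This is not the authors' architecture: the shared branch is reduced to a single linear layer with ReLU, the convolutional fusion is reduced to a linear mixing, and all weights are random placeholders rather than learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(view, weights):
    """Toy stand-in for the shared DCNN branch: one linear projection
    plus ReLU. Both views use the SAME weights (the Siamese property)."""
    return np.maximum(view.flatten() @ weights, 0.0)

def fuse_and_score(left_feat, right_feat, mix):
    """Imitate the interaction step: concatenate left/right features and
    apply a linear mixing (the paper uses convolutions plus a regression
    head; here both are collapsed into one vector for brevity)."""
    return float(np.concatenate([left_feat, right_feat]) @ mix)

# Hypothetical shapes: 8x8 grayscale views, 16-dim features.
W = rng.standard_normal((64, 16))   # shared branch weights (random, illustrative)
M = rng.standard_normal(32)         # fusion + regression weights (random)

left = rng.random((8, 8))
right = rng.random((8, 8))

fl = extract_features(left, W)
fr = extract_features(right, W)     # same W: weight sharing across views
quality = fuse_and_score(fl, fr, M) # scalar quality estimate
```

    The essential design choice mirrored here is weight sharing: both views pass through the identical branch, so the network cannot learn view-specific feature extractors, and all binocular interaction happens in the fusion stage.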

    Study of spatio-temporal modeling in video quality assessment

    Video quality assessment (VQA) has received remarkable attention recently. Most popular VQA models employ recurrent neural networks (RNNs) to capture the temporal quality variation of videos. However, each long-term video sequence is commonly labeled with a single quality score, from which RNNs might not be able to learn long-term quality variation well. A natural question then arises: what is the real role of RNNs in learning the visual quality of videos? Do they learn spatio-temporal representations as expected, or merely aggregate spatial features redundantly? In this work, we conduct a comprehensive study by training a family of VQA models with carefully designed frame sampling strategies and spatio-temporal fusion methods. Our extensive experiments on four publicly available in-the-wild video quality datasets lead to two main findings. First, the plausible spatio-temporal modeling module (i.e., RNNs) does not facilitate quality-aware spatio-temporal feature learning. Second, sparsely sampled video frames achieve performance competitive with using all video frames as input. In other words, spatial features play a vital role in capturing video quality variation for VQA. To the best of our knowledge, this is the first work to explore the issue of spatio-temporal modeling in VQA.
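    The second finding (sparse sampling is competitive with dense sampling) can be made concrete with a small numpy sketch. Assuming each frame is summarized by a spatial feature vector and the video score is a pooled linear readout of those features (a simplification, not the authors' models; the features and weights below are synthetic), sampling every 16th frame changes the pooled score only slightly when quality varies slowly over time.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical video: T frames, each summarized by a D-dim spatial feature
# vector (in practice from a CNN backbone; here random with a slow drift
# to mimic gradual quality variation).
T, D = 240, 32
frame_feats = rng.random((T, D)) + np.linspace(0.0, 1.0, T)[:, None]

def video_score(feats, stride=1):
    """Average-pool spatial features over (possibly sparsely) sampled
    frames, then apply a uniform linear quality head (illustrative)."""
    w = np.ones(feats.shape[1]) / feats.shape[1]
    return float(feats[::stride].mean(axis=0) @ w)

dense = video_score(frame_feats, stride=1)    # all 240 frames
sparse = video_score(frame_feats, stride=16)  # only 15 frames
```

    Under this model, temporal pooling of sparse frames closely tracks the dense score, which is consistent with the paper's claim that spatial features carry most of the quality signal.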

    No Reference Quality Assessment for 3D Synthesized Views by Local Structure Variation and Global Naturalness Change


    No reference quality assessment for screen content images with both local and global feature representation

    In this paper, we propose a novel no-reference quality assessment method for screen content images (SCIs) that incorporates statistical luminance and texture features (NRLT) with both local and global feature representation. The method is inspired by the perceptual property of the human visual system (HVS) that it is sensitive to luminance change and texture information during image perception. We first calculate the luminance map through local normalization, which is further used to extract statistical luminance features at the global scope. Second, inspired by findings from neuroscience that high-order derivatives can capture image texture, we adopt four filters with different directions to compute gradient maps from the luminance map. These gradient maps are then used to extract second-order derivatives by the local binary pattern. We further extract the texture feature as the histogram of high-order derivatives at the global scope. Finally, support vector regression is applied to train the mapping function from quality-aware features to subjective ratings. Experimental results on a public large-scale SCI database show that the proposed NRLT predicts the visual quality of SCIs better than relevant existing methods, including even some full-reference visual quality assessment methods.
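    The first step of the pipeline, local normalization of luminance, can be sketched in numpy. This is a generic divisive normalization (subtract the local mean, divide by the local standard deviation over a small window), a common construction in NR-IQA; the box window and parameters below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def local_normalize(img, k=3, C=1e-3):
    """Divisive normalization of a luminance map: for each pixel, subtract
    the mean and divide by the std of its k x k neighborhood. C prevents
    division by zero in flat regions."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    # All k x k windows at once: shape (H, W, k, k).
    win = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    mu = win.mean(axis=(2, 3))
    sigma = np.sqrt(win.var(axis=(2, 3)))
    return (img - mu) / (sigma + C)

rng = np.random.default_rng(2)
img = rng.random((16, 16))          # stand-in luminance map
norm = local_normalize(img)

# A global statistical feature: histogram of normalized coefficients
# (the paper builds its global features from such statistics).
hist, _ = np.histogram(norm, bins=10, range=(-3, 3), density=True)
```

    The normalized coefficients are approximately zero-mean regardless of the input's brightness, which is what makes histogram statistics over them comparable across images.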

    Learning a No-Reference Quality Predictor of Stereoscopic Images by Visual Binocular Properties


    Dynamic Compressive Mechanical Behavior and Microstructure Evolution of Rolled Fe-28Mn-10Al-1.2C Low-Density Steel

    In this study, the quasi-static and dynamic compressive mechanical behavior of a rolled Fe-28Mn-10Al-1.2C low-density steel was investigated. X-ray diffraction, optical microscopy, electron backscatter diffraction and transmission electron microscopy were used to characterize the microstructure evolution. The results show that the steel has remarkable strain rate sensitivity and strong strain hardenability under high-strain-rate compression. More specifically, the deformation behavior changed as the strain rate increased. A mathematical analysis for calculating the stacking fault energies and the critical resolved shear stresses for twinning was employed to discuss the nucleation of twinning. Microband-induced plasticity and twinning-induced plasticity controlled the deformation under high-strain-rate compression and provided a strong strain-hardening effect. This improved mechanical response can broaden the use of low-density steel in automotive applications.

    A Novel lncRNA, LINC00460, Affects Cell Proliferation and Apoptosis by Regulating KLF2 and CUL4A Expression in Colorectal Cancer

    Emerging evidence has shown that long noncoding RNAs (lncRNAs) play important roles in human colorectal cancer (CRC) biology, although few lncRNAs have been characterized in CRC. Therefore, the functional significance of lncRNAs in the malignant progression of CRC still needs to be further explored. In this study, by analyzing TCGA RNA sequencing data and other publicly available microarray data, we identified a novel lncRNA, LINC00460, whose expression was significantly upregulated in CRC tissues compared to adjacent normal tissues. Consistently, real-time qPCR results also verified that LINC00460 was overexpressed in CRC tissues and cells. Furthermore, high LINC00460 expression levels in CRC specimens were correlated with larger tumor size, advanced tumor stage, lymph node metastasis and shorter overall survival. In vitro and in vivo assays of LINC00460 alterations revealed a complex integrated phenotype affecting cell growth and apoptosis. Mechanistically, LINC00460 repressed Krüppel-like factor 2 (KLF2) transcription by binding to enhancer of zeste homolog 2 (EZH2). LINC00460 also functioned as a molecular sponge for miR-149-5p, antagonizing its ability to repress cullin 4A (CUL4A) protein translation. Taken together, our findings support a model in which the LINC00460/EZH2/KLF2 and LINC00460/miR-149-5p/CUL4A crosstalk serves as a critical effector in CRC tumorigenesis and progression, suggesting new therapeutic directions in CRC. Keywords: LINC00460, KLF2, CUL4A, proliferation, apoptosis, colorectal cancer